AutoTransition: Learning to Recommend Video Transition Effects

Authors

Abstract

Video transition effects are widely used in video editing to connect shots, creating cohesive and visually appealing videos. However, it is challenging for non-professionals to choose the best transition due to a lack of cinematographic knowledge and design skills. In this paper, we present the premier work on performing automatic video transition recommendation (VTR): given a sequence of raw video shots and companion audio, recommend video transitions for each pair of neighboring shots. To solve this task, we collect a large-scale video transition dataset using publicly available video templates in editing software. We then formulate VTR as a multi-modal retrieval problem from vision/audio to video transitions and propose a novel matching framework which consists of two parts. First, we learn the embedding of video transitions through a transition classification task. Second, we propose a model to learn the matching correspondence from vision/audio inputs to video transitions. Specifically, the proposed model employs a transformer to fuse vision and audio information, as well as capture context cues in the sequential transition outputs. Through both quantitative and qualitative experiments, we clearly demonstrate the effectiveness of our method. Notably, in a comprehensive user study, our method receives comparable scores compared with professional editors while improving editing efficiency by 300×. We hope this work serves to inspire other researchers to work on this new task. The codes are public at https://github.com/acherstyx/AutoTransition.

Keywords: Video transition recommendation · Multi-modal retrieval · Video editing
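The retrieval formulation described above — embed each candidate transition, fuse the vision and audio features for each pair of neighboring shots, and pick the transition whose embedding best matches the fused query — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transition embeddings are random stand-ins for the ones the paper learns via classification, and a simple feature average replaces the multi-modal transformer fusion.

```python
import math
import random

random.seed(0)
DIM = 8  # toy embedding dimension

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical learned transition embeddings; in the paper these come
# from a first-stage transition classification task.
transition_names = ["fade", "wipe_left", "zoom_in", "dissolve"]
transition_emb = [l2_normalize([random.gauss(0, 1) for _ in range(DIM)])
                  for _ in transition_names]

def recommend(vision_feats, audio_feats):
    """Recommend one transition per pair of neighboring shots.

    The real model fuses vision and audio with a transformer; here we
    simply average the two feature vectors (illustration only), then
    retrieve the nearest transition by cosine similarity.
    """
    picks = []
    for v, a in zip(vision_feats, audio_feats):
        fused = l2_normalize([(x + y) / 2.0 for x, y in zip(v, a)])
        scores = [dot(fused, t) for t in transition_emb]
        picks.append(transition_names[scores.index(max(scores))])
    return picks

# Toy query: three shot pairs with random vision/audio features.
vision = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(3)]
audio = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(3)]
print(recommend(vision, audio))
```

Casting recommendation as retrieval (rather than direct classification per slot) lets new transition effects be added by embedding them once, without retraining the fusion model.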


Similar articles

Learning to Recommend Quotes for Writing

In this paper, we propose and address a novel task of recommending quotes for writing. Quote is short for quotation, which is the repetition of someone else’s statement or thoughts. It is a common case in our writing when we would like to cite someone’s statement, like a proverb or a statement by some famous people, to make our composition more elegant or convincing. However, sometimes we are s...


Learning to Recommend Accurate and Diverse Items

In this study, we investigate diversified recommendation problem by supervised learning, seeking significant improvement in diversity while maintaining accuracy. In particular, we regard each user as a training instance, and heuristically choose a subset of accurate and diverse items as groundtruth for each user. We then represent each user or item as a vector resulted from the factorization of...


Learning to Recommend via Inverse Optimal Matching

We consider recommendation in the context of optimal matching, i.e., we need to pair or match a user with an item in an optimal way. The framework is particularly relevant when the supply of an individual item is limited and it can only satisfy a small number of users even though it may be preferred by many. We leverage the methodology of optimal transport of discrete distributions and formulat...


Learning Visual Features to Recommend Grasp Configurations

This paper is a preliminary account of current work on a visual system that learns to aid in robotic grasping and manipulation tasks. Localized features of the visual scene are learned that correlate reliably with the orientation of a dextrous robotic hand during haptically guided grasps. On the basis of these features, hand configurations are recommended for future grasping operations. The lear...


Learning to Recommend with User Generated Content

In the era of Web 2.0, user generated content (UGC), such as social tags and user reviews, widely exists on the Internet. However, in recommender systems, most existing related works only study a single kind of UGC in each paper, and different types of UGC are utilized in different ways. This paper proposes a unified way to use different types of UGC to improve the prediction accuracy for recomm...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-19839-7_17